Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update

Synopsis

Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update

Type/Severity

Security Advisory: Important

Topic

An update is now available for Red Hat Ceph Storage 4.2.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

The ceph-ansible package provides Ansible playbooks for installing, maintaining, and upgrading Red Hat Ceph Storage.
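
For example, a typical ceph-ansible run invokes the main playbook from the administration node against an inventory file. The following is a minimal sketch; the inventory file name "hosts" is an assumption, not a value from this advisory:

    # Run the main playbook from the ceph-ansible directory on the
    # administration node.
    cd /usr/share/ceph-ansible
    ansible-playbook -i hosts site.yml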

The libntirpc package contains a new implementation of the original libtirpc, the transport-independent RPC (TI-RPC) library used by NFS-Ganesha.

NFS-Ganesha is an NFS server that runs in user space. It comes with various back-end modules, called FSALs (File System Abstraction Layers), provided as shared objects to support different file systems and namespaces.
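
For illustration, a minimal NFS-Ganesha export block selecting the CephFS back-end module might look like the following ganesha.conf sketch; the export ID and paths are assumptions, not values from this advisory:

    EXPORT {
        Export_Id = 1;           # unique export identifier (assumed)
        Path = "/";              # path within the backing file system
        Pseudo = "/cephfs";      # NFSv4 pseudo-filesystem path (assumed)
        Access_Type = RW;
        FSAL {
            Name = CEPH;         # selects the CephFS FSAL shared object
        }
    }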

Security Fix(es):

  • ceph: User credentials can be manipulated and stolen by Native CephFS consumers of OpenStack Manila (CVE-2020-27781)
  • ceph: CEPHX_V2 replay attack protection lost (CVE-2020-25660)
  • ceph-ansible: insecure ownership on /etc/ceph/iscsi-gateway.conf configuration file (CVE-2020-25677)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
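
As a quick check related to CVE-2020-25677, the ownership and permissions on the iSCSI gateway configuration file can be inspected from a shell. This is an illustrative sketch; the mode shown is an assumed restrictive value, and the updated packages set the correct ownership themselves:

    # Inspect ownership and permissions on the iSCSI gateway configuration.
    ls -l /etc/ceph/iscsi-gateway.conf
    # Tighten the mode if the file is readable by unprivileged users
    # (0600 is an assumed restrictive value).
    chmod 0600 /etc/ceph/iscsi-gateway.conf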

Bug Fix(es):

These updated packages include numerous bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage 4.2 Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4.2/html/release_notes/

All users of Red Hat Ceph Storage are advised to upgrade to these updated packages, which provide these security and bug fixes.

Solution

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
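
As a rough sketch of what applying the update can look like (the article above is authoritative), packages can be updated per node with the system package manager, or the cluster can be upgraded with the ceph-ansible rolling update playbook; the inventory file name "hosts" is an assumption:

    # Per-node package update (RHEL 7 shown; use "dnf" on RHEL 8).
    yum update
    # Cluster-wide rolling upgrade via ceph-ansible.
    cd /usr/share/ceph-ansible
    ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml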

Affected Products

  • Red Hat Enterprise Linux Server 7 x86_64
  • Red Hat Ceph Storage MON 4 for RHEL 8 x86_64
  • Red Hat Ceph Storage MON 4 for RHEL 7 x86_64
  • Red Hat Ceph Storage OSD 4 for RHEL 8 x86_64
  • Red Hat Ceph Storage OSD 4 for RHEL 7 x86_64
  • Red Hat Enterprise Linux for x86_64 8 x86_64
  • Red Hat Ceph Storage for Power 4 for RHEL 8 ppc64le
  • Red Hat Ceph Storage for Power 4 for RHEL 7 ppc64le
  • Red Hat Ceph Storage MON for Power 4 for RHEL 8 ppc64le
  • Red Hat Ceph Storage MON for Power 4 for RHEL 7 ppc64le
  • Red Hat Ceph Storage OSD for Power 4 for RHEL 8 ppc64le
  • Red Hat Ceph Storage OSD for Power 4 for RHEL 7 ppc64le
  • Red Hat Ceph Storage for IBM z Systems 4 s390x
  • Red Hat Ceph Storage MON for IBM z Systems 4 s390x
  • Red Hat Ceph Storage OSD for IBM z Systems 4 s390x

Fixes

  • BZ - 1582280 - RFE: Standard log collection via ceph-ansible
  • BZ - 1731158 - [RFE] multisite playbook to verify connectivity amongst two sites
  • BZ - 1763021 - Getting warning messages while executing rbd CLI commands
  • BZ - 1774428 - Live image migration command "Abort" is not working as expected
  • BZ - 1774605 - Ceph 4 building outdated 8-year-old version of python-repoze-lru
  • BZ - 1786106 - [iscsi]:avc denial on rbd-target-api from ioctl access
  • BZ - 1791911 - Validate host can proceed in NOTOK if Cluster Type was originally Development/POC
  • BZ - 1800382 - Support 2-site Stretch Clusters in RADOS
  • BZ - 1826690 - [Ceph-dashboard] Pool: Performance Details showing wrong capacity usage
  • BZ - 1828246 - [GSS]Ceph installation via Cockpit fails with "Systemd must be present"
  • BZ - 1829214 - ansible-runner-service does not remove hosts from previous runs
  • BZ - 1830375 - cpu stats incorrectly displayed
  • BZ - 1831299 - cephfs/Filesystem component fails when clicked on "clients" tab
  • BZ - 1831682 - [ansible-runner-service] : auto generated ssh_key permission hindering users to use ceph-ansible for day-2 operations
  • BZ - 1836431 - Support Deployment with Autoscaler on existing cluster
  • BZ - 1841436 - [RFE] Need support for including rgw interface without enabling multi-site option in multi-site cluster.
  • BZ - 1845501 - ls command hangs on nfs ganesha mountpoint with ERROR in ganesha log: FSAL :CRIT :Invoking unsupported FSAL operation
  • BZ - 1847166 - [RFE] Ceph ansible doesn't update crush map based on device classes
  • BZ - 1850947 - enabling RBD stats collection breaks ceph metrics endpoint
  • BZ - 1855148 - [Ceph-installer]: Set higher override values for rgw
  • BZ - 1855439 - [RFE] support crush rule during rgw replicated pool creation
  • BZ - 1855448 - [CodeChange]mgr: Improve internal python to c++ interface
  • BZ - 1856916 - mgr: make progress module more efficient
  • BZ - 1856960 - [Tool] Update the ceph-bluestore-tool for adding rescue procedure for bluefs log replay
  • BZ - 1856981 - 9GB of RAM are wasted for TASK [ceph-facts : set_fact ceph_current_status (convert to json)]
  • BZ - 1857414 - [RFE] msgr v2.1 changes
  • BZ - 1859180 - nfs-ganesha: rebase to 3.x
  • BZ - 1859679 - rebase ceph to 14.2.11
  • BZ - 1859872 - During upgrade moving from One RGW per node to Two RGW per node fails with ceph-ansible multisite configured
  • BZ - 1860057 - [RFE] Global- and pool-level configuration overrides should apply to in-use images
  • BZ - 1860073 - [RFE] Add support for snapshot-based mirroring
  • BZ - 1860739 - [container] OSD does not stay down after osd_max_markdown_count has been exceeded
  • BZ - 1861755 - [RHCS4] delete the unused files after purging cluster
  • BZ - 1866257 - [GSS][RADOS]MONs have slow/long running ops
  • BZ - 1866308 - [grafana] Some Grafana panels (AVG Disk, ...) in Host overview, Host details, OSD details etc. are displaying N/A or no data
  • BZ - 1866834 - deployment fails to deploy RGWs: Socket file not found
  • BZ - 1867697 - Optimize subtree pin checks in MDS
  • BZ - 1867698 - libcephfs client double-ref count decrement
  • BZ - 1868638 - [GSS][ceph-dashboard] When we select “This week” for Historical Data in Overall Performance graph, it is not showing any metrics (N/A) in dashboard
  • BZ - 1869797 - 'ceph-volume raw prepare' fails to prepare because ceph-osd cannot acquire lock on device
  • BZ - 1872006 - After restarting an mds, its standby-replay mds remained in the "resolve" state
  • BZ - 1872028 - mds should not recover files after normal session close
  • BZ - 1872030 - mds decoding of enum types on big-endian systems broken
  • BZ - 1872033 - mds parse dirfrag's ndist is always 0
  • BZ - 1872459 - Bucket listing: raise default rgw_bucket_index_max_aio to 128
  • BZ - 1872879 - Allow mapping and unmapping rbd images from a container without --net=host
  • BZ - 1873221 - Tracking BlueStore changes for RHCS 4.2
  • BZ - 1873915 - /var/log/tcmu-runner.log within tcmu-runner container does not get rotated and log grows without limit.
  • BZ - 1874756 - RGW: renaming a rgw user not happening on rhcs 4.2
  • BZ - 1875628 - mon: only take "in osds" into consideration when trimming osdmaps
  • BZ - 1875736 - [RGW][LC]: rgw daemon crash with objects tags
  • BZ - 1876692 - site-docker.yml playbook does not manage to login registry.redhat.io from behind a http proxy
  • BZ - 1876976 - [RGW][LC] 31 of 1k buckets are not being processed by lc
  • BZ - 1877300 - [ceph-volume] : ceph-volume simple scan failing on ceph-volume OSDs
  • BZ - 1877413 - ceph: problems with clusters containing nodes on s390x for some specific configurations and workloads
  • BZ - 1877737 - [RGW LC]: Lifecycle policy ignores non-current expiration for Prefix/TAG filters in versioned buckets
  • BZ - 1877745 - [dashboard][RGW]: Object Gateway > Buckets fails to load data
  • BZ - 1877910 - bluestore: follow-on changes for more intelligent DB space usage
  • BZ - 1878145 - ceph-mgr alerts module fails to send email: ALERTS_SMTP_ERROR
  • BZ - 1878250 - [RGW][NFS-Ganesha]: ls command exits with 'memory exhausted' error
  • BZ - 1878267 - [ceph-dashboard][RGW-Bucket]: Details section of a bucket with more num_shards exceeds the current pageview
  • BZ - 1878268 - librbd QoS throttling might result in an assertion failure
  • BZ - 1878271 - [ceph-dashboard]Monitoring - All Alerts: The alerts still seem to be loading
  • BZ - 1878500 - [GSS][ceph-volume] OSDs being deployed bluestore mixed type for SSD disks
  • BZ - 1879178 - ceph-ansible should bump to ansible 2.9
  • BZ - 1879819 - Test LDAP/AD support for RHCS Dashboard
  • BZ - 1879836 - Inventory file should be local to the ceph-ansible directory
  • BZ - 1880188 - bluestore: fix collection_list ordering
  • BZ - 1880252 - [ceph-ansible] docker registry password with special character quotation(') fails to log in when running ansible-playbook
  • BZ - 1880458 - [Ceph-volume] clone of bug 1877672 : Upgrade fail at [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]
  • BZ - 1880476 - grafana-server group needs to be changed to grafana_server to ensure ansible compatibility with 2.10
  • BZ - 1881288 - osd daemon went down and no OSD daemon log in journalctl nor as a file
  • BZ - 1881313 - [ceph-ansible] playbook is failing when radosgw_num_instances is set to 2
  • BZ - 1881523 - fs to bs : SSDs were not zapped and thus not included in bluestore OSDs when osd_auto_discovery is set to true
  • BZ - 1882426 - OSD is crashed with assert - FAILED ceph_assert(cur >= p.length)
  • BZ - 1882484 - GC perfcounter fails to update when deletion occurs
  • BZ - 1882705 - rbd: make common options override krbd-specific options
  • BZ - 1883283 - 99k+ rgw.multimeta entries in bucket stats, cannot list bucket contents
  • BZ - 1884023 - list pending GCs is very slow
  • BZ - 1885693 - [ceph-mgr] missing dependency for the python-enum34 in the ceph-mgr-14.2.8-111.el7cp.x86_64 on the RHEL7
  • BZ - 1886461 - [RFE] Add to the Beast frontend an 'access log' line similar to CivetWeb.
  • BZ - 1886534 - add-osd playbook failed
  • BZ - 1886653 - The list of disallowed_leaders is not listed in mon dump in disallow mode
  • BZ - 1886670 - Monitor::_quorum_status() erroneously assumes the first monitor in quorum is the leader
  • BZ - 1886677 - Disallowed monitor state is lost on restart (DISALLOW and CONNECTIVITY modes)
  • BZ - 1887716 - mds containers failed to start post graceful shutdown of a complete ceph cluster.
  • BZ - 1889426 - [security] Set Dashboard in HTTPS by default
  • BZ - 1889668 - [RADOS]: Getting ambiguity messages by running enable_stretch_mode
  • BZ - 1889712 - [Ceph-dashboard] Update dashboard branding about version to 4.2
  • BZ - 1889963 - Buckets creation failed for rhel 8 in 4.2
  • BZ - 1890354 - CVE-2020-25660 ceph: CEPHX_V2 replay attack protection lost
  • BZ - 1890439 - [ceph-ansible][ceph-containers] can not view the log for containerized ceph daemon using journalctl
  • BZ - 1891098 - Configure "ceph health detail" to run periodically and log output to cluster log.
  • BZ - 1892108 - CVE-2020-25677 ceph-ansible: insecure ownership on /etc/ceph/iscsi-gateway.conf configuration file
  • BZ - 1892173 - Unable to retrieve the current connection scores via connection scores dump command
  • BZ - 1892295 - Global mirror snapshot schedules incorrectly tied to specific MGR instance
  • BZ - 1892387 - mon_host configuration with DNS hostname fails to load making cluster inaccessible
  • BZ - 1893740 - Improve scaling of snapshot mirror scheduler
  • BZ - 1893989 - ceph-ansible sets bad mode on files in /var/lib/ceph/mon/{cluster}-{mon_name}/
  • BZ - 1894702 - Unnecessary bilogs are left in sync-disabled buckets
  • BZ - 1896587 - Ability to turn off progress module
  • BZ - 1897125 - [Ceph-Installer] : Deployment fails due to mismatch in ansible version in ansible-runner container for 4.2
  • BZ - 1897995 - [GSS] mgr restful encountering AttributeError while humanifying command
  • BZ - 1898486 - ceph-ansible doesn't recreate /var/lib/ceph/osd/{cluster}-{id} when redeploying an OSD node
  • BZ - 1898599 - rgw: append operation will trigger the garbage collection mechanism and remove tails
  • BZ - 1898856 - Ceph-Ansible deployment failing with latest builds
  • BZ - 1899860 - [RGW][reshard] client.admin daemon crash seen on manual resharding a bucket
  • BZ - 1900109 - CVE-2020-27781 Ceph: User credentials can be manipulated and stolen by Native CephFS consumers of OpenStack Manila
  • BZ - 1901036 - [Ceph-dashboard] Default https based dashboard Ceph metric endpoint broken
  • BZ - 1902034 - Module 'crash' has failed: dictionary changed size during iteration
  • BZ - 1902149 - [Ceph-Ansible] iSCSI Deployment fails for the TASK [ceph-iscsi-gw : systemd start tcmu-runner, rbd-target-api and rbd-target-gw containers]
  • BZ - 1902281 - [ceph-ansible] : using add-mon.yml creates new monitor keyring on new monitor and thus doesn't get added
  • BZ - 1903612 - RBD fast-diff regression introduced due to snapshot-based mirroring changes
  • BZ - 1904340 - [ceph-dashboard] seen code crash in active mgr as ENGINE Error in HTTPServer.tick
  • BZ - 1904958 - [RGW : NFS] Unable to access certain buckets for a particular radosgw user "testuser" on the nfs mount.

CVEs

  • CVE-2020-25660
  • CVE-2020-25677
  • CVE-2020-27781

References

  • https://access.redhat.com/security/updates/classification/#important